# Programming Assistance
English Picks

Inception Labs
Inception Labs is a company focused on developing diffusion-based large language models (dLLMs). Its technology draws inspiration from advanced image and video generation systems such as Midjourney and Sora. Using diffusion models, Inception Labs offers generation speeds 5-10 times faster than traditional autoregressive models, higher efficiency, and stronger control over generation. Its models support parallel text generation, can correct errors and hallucinations, are suited to multimodal tasks, and perform well on reasoning and structured data generation. The team is composed of researchers and engineers from Stanford, UCLA, and Cornell University and is a pioneer in the field of diffusion language models.
AI Model
68.2K

Scira
Scira is an AI-powered search engine designed to give users a more efficient and accurate information retrieval experience through a powerful language model and search capabilities. It supports multiple language models, such as Grok 2.0 and Claude 3.5 Sonnet, and integrates search tools like Tavily, providing web search, code execution, weather queries, and more. Scira's main advantages are its clean interface and strong feature integration, making it a good fit for users who are dissatisfied with traditional search engines and want to use AI to improve search efficiency. The project is open source and free; users can deploy it locally or use the hosted online service as needed.
AI Search
85.8K

Llada
LLaDA is a novel diffusion model that generates text through a diffusion process rather than traditional autoregressive decoding. It performs well on the scalability of language generation, instruction following, in-context learning, dialogue, and compression. Developed by researchers from Renmin University of China and Ant Group, this 8B-parameter model is trained entirely from scratch. Its main advantage is the flexibility of diffusion-based generation, supporting language tasks such as solving mathematical problems, code generation, translation, and multi-turn dialogue. LLaDA offers a new direction for the development of language models, particularly in generation quality and flexibility.
AI Model
59.9K

Deepseek Japanese
DeepSeek is an advanced language model developed by a Chinese AI lab backed by the quantitative hedge fund High-Flyer. It focuses on open-source models and innovative training methods. Its R1 series models demonstrate exceptional performance in logical reasoning and problem-solving, employing reinforcement learning and a mixture-of-experts architecture to optimize performance and achieve efficient training at low cost. DeepSeek's open-source strategy has fostered community innovation while sparking industry discussion on AI competition and the impact of open-source models. Its free, registration-free access further lowers the barrier to entry, making it suitable for a wide range of applications.
AI Model
55.8K

Qwen2.5 Max
Qwen2.5-Max is a large-scale Mixture-of-Experts (MoE) model pre-trained on over 20 trillion tokens and further refined with supervised fine-tuning and reinforcement learning from human feedback. It performs strongly across multiple benchmarks, demonstrating robust knowledge and coding capabilities. The model is accessible via an API provided by Alibaba Cloud, supporting developers across various application scenarios (a minimal API-call sketch follows this entry). Its key advantages include powerful performance, flexible deployment options, and efficient training techniques, aimed at providing smarter solutions in the field of artificial intelligence.
AI Model
481.6K
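
Since the entry above notes that Qwen2.5-Max is reached through an Alibaba Cloud API, here is a minimal, hedged sketch of a call via Alibaba Cloud Model Studio's OpenAI-compatible mode. The base URL and the `qwen-max` model alias are assumptions to verify against the current Alibaba Cloud documentation.

```python
# Minimal sketch: calling Qwen2.5-Max through an OpenAI-compatible endpoint.
# Base URL and model name are assumptions taken from Alibaba Cloud's documented
# compatible mode; check the current docs before relying on them.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # your Model Studio API key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

resp = client.chat.completions.create(
    model="qwen-max",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
)
print(resp.choices[0].message.content)
```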
English Picks

Codename Goose
Codename Goose is a locally executed AI agent tool intended to assist developers in efficiently completing engineering tasks. It emphasizes open-source functionality and local execution to ensure users have complete control over task management. By connecting to external servers or APIs, Goose can be extended to automate complex tasks based on user requirements, allowing developers to focus on more critical work. Its open-source nature encourages contributions and innovation from the developer community, while its local operation safeguards data privacy and task execution efficiency.
Development & Tools
65.4K
Chinese Picks

Kimi K1.5
Kimi k1.5, developed by MoonshotAI, is a multimodal language model that significantly enhances performance in complex reasoning tasks through reinforcement learning and long-context extension techniques. The model has achieved industry-leading results on several benchmarks, surpassing GPT-4o and Claude 3.5 Sonnet on mathematical reasoning tasks such as AIME and MATH-500. Its primary advantages include an efficient training framework, strong multimodal reasoning capabilities, and support for long contexts. Kimi k1.5 is mainly aimed at scenarios requiring complex reasoning and logical analysis, such as programming assistance, mathematical problem-solving, and code generation.
Model Training and Deployment
254.5K

Qwq 32B Preview Gptqmodel 4bit Vortex V3
This product is a 4-bit GPTQ quantization of QwQ-32B-Preview (which is itself built on Qwen2.5-32B), achieving efficient inference and low resource consumption. It significantly reduces the model's storage and compute requirements while maintaining strong performance, making it suitable for resource-constrained environments. The model primarily targets applications requiring high-performance language generation, including intelligent customer service, programming assistance, and content creation. Its open-source license and flexible deployment options offer broad prospects in both commercial and research settings (a hedged loading sketch follows this entry).
Chatbot
51.3K
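
As a hedged sketch of how a 4-bit GPTQ checkpoint like this is typically consumed, the snippet below loads it with the transformers chat pipeline. The repository id is a placeholder, and a GPTQ backend (for example the gptqmodel or auto-gptq package, together with optimum and accelerate) must be installed for the quantized weights to load.

```python
# Minimal sketch: running a GPTQ-quantized checkpoint with the transformers
# text-generation pipeline. Repo id below is a placeholder, not the real one.
from transformers import pipeline

model_id = "<org>/QwQ-32B-Preview-gptqmodel-4bit-vortex-v3"  # hypothetical repo id

# device_map="auto" needs accelerate; the GPTQ backend handles dequantization.
pipe = pipeline("text-generation", model=model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what 4-bit GPTQ quantization trades off."}]
out = pipe(messages, max_new_tokens=256)
# Recent transformers return the full chat; the last turn is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```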

Cursor Convo Export
Cursor Convo Export is a Cursor AI extension developed by Edwin Klesman, designed to help users export chat histories with Cursor AI to a new window or timestamped file. This plugin is particularly useful for programmers, as it allows them to save crucial instructions and information provided by the AI, such as deployment steps and architectural reasoning, for future reference. Additionally, if a conversation with Cursor is interrupted, users can use this plugin to copy the chat content into a new conversation, enabling them to continue their work. The plugin is priced at €5, has a size of 6.25 MB, and offers a 30-day money-back guarantee.
Development & Tools
56.9K

Dria Agent A 7B
Dria-Agent-a-7B is a large language model trained on the Qwen2.5-Coder series, specializing in agent applications. It uses a Pythonic function-calling approach, which offers advantages over traditional JSON function calling: several tools can be invoked in one pass, reasoning and actions can be expressed freely, and complex solutions can be composed in a single response (an illustrative sketch of the idea follows this entry). The model has performed well across various benchmarks, including the Berkeley Function Calling Leaderboard (BFCL), MMLU-Pro, and the Dria-Pythonic-Agent-Benchmark (DPAB). With 7.62 billion parameters stored in BF16, it supports text generation tasks. Its key benefits include strong programming assistance, an efficient function-calling style, and high accuracy in specific domains. The model is suited to applications requiring complex logic and multi-step task execution, such as automated programming and intelligent agents. It is currently available for free on the Hugging Face platform.
Coding Assistant
50.2K
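
The "Pythonic function calling" idea can be illustrated with a small, self-contained sketch: the model writes a Python snippet that may call several tools in one go, and the host parses it and executes only whitelisted calls. The tool functions and the model output shown here are made up for the example and are not Dria's actual prompt or response format.

```python
# Illustrative only: Pythonic tool calling vs. JSON tool calling.
# The model emits a Python snippet; the host validates and executes it.
import ast

def get_weather(city: str) -> str:
    return f"22C and clear in {city}"

def send_email(to: str, body: str) -> str:
    return f"email to {to} queued"

TOOLS = {"get_weather": get_weather, "send_email": send_email}

# Something a Pythonic-calling model might return for:
# "Email alice@example.com the current weather in Berlin."
model_output = """
weather = get_weather("Berlin")
status = send_email("alice@example.com", f"Berlin right now: {weather}")
"""

# Allow only whitelisted top-level calls; real systems need stricter sandboxing.
tree = ast.parse(model_output)
for node in ast.walk(tree):
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
        if node.func.id not in TOOLS:
            raise ValueError(f"disallowed call: {node.func.id}")

ns = {**TOOLS, "__builtins__": {}}
exec(compile(tree, "<model-output>", "exec"), ns)
print(ns["weather"], "|", ns["status"])
```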
English Picks

Codestral 25.01
Codestral 25.01 is an advanced coding model from Mistral AI, representing the cutting edge of its programming-model line. The model is lightweight, fast, and proficient in over 80 programming languages, optimized for low-latency, high-frequency usage. It supports tasks such as fill-in-the-middle (FIM) completion, code correction, and test generation. Thanks to improvements in architecture and tokenization, it generates and completes code roughly twice as fast as its predecessor, making it a leader in programming tasks and particularly strong in FIM use cases (a hedged FIM example follows this entry). Its main advantages are an efficient architecture, rapid code generation, and fluency across many programming languages, significantly improving developers' coding efficiency. Codestral 25.01 is available to developers worldwide through IDE/plugin partners such as Continue.dev, and supports local deployment to meet enterprise data- and model-residency requirements.
Coding Assistant
52.7K
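
To make the FIM use case concrete, here is a minimal sketch assuming the mistralai Python SDK's fill-in-the-middle endpoint; the `codestral-latest` model alias is an assumption to check against Mistral's current documentation.

```python
# Minimal sketch of fill-in-the-middle (FIM) completion: the model fills the
# gap between a prefix and a suffix, as an editor would at the cursor position.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

prefix = "def fibonacci(n: int) -> int:\n    "
suffix = "\n\nprint(fibonacci(10))"

resp = client.fim.complete(
    model="codestral-latest",  # assumed alias
    prompt=prefix,             # code before the cursor
    suffix=suffix,             # code after the cursor
    max_tokens=128,
)
print(resp.choices[0].message.content)
```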

Github Assistant
GitHub Assistant is an innovative programming support tool that leverages natural language processing technologies to enable users to explore and understand various GitHub code repositories using simple language inquiries. Its main advantages are usability and efficiency, allowing users to quickly obtain the necessary information without needing complex programming knowledge. The product is jointly developed by assistant-ui and relta, aiming to provide developers with a more convenient and intuitive way to explore code. GitHub Assistant is positioned as a powerful support tool for programmers, helping them better understand and utilize open-source code resources.
Coding Assistant
49.7K
Chinese Picks

Baidu AI Search
Baidu AI Search is an intelligent search platform built on Baidu's AI technology, integrating search, smart creation, and image processing to enhance user efficiency and creativity. It provides convenient services for scenarios such as office work, study, and design, building on Baidu's search engine and AI stack to offer comprehensive intelligent search solutions. Some features are available for free trial, while others may require payment.
AI Search
75.6K
Chinese Picks

GLM Zero Preview
GLM-Zero-Preview is Zhipu's first reasoning model trained with advanced reinforcement learning techniques, focused on strengthening AI reasoning. It excels at mathematical logic, code, and complex problems requiring deep reasoning. Compared with its base model, it significantly improves performance on expert tasks without a major compromise on general capabilities. Its performance is on par with OpenAI's o1-preview on evaluations such as AIME 2024, MATH500, and LiveCodeBench. Zhipu Huazhang Technology Co., Ltd. is dedicated to enhancing model capabilities through reinforcement learning, and will soon launch the official version of GLM-Zero, extending deep-thinking capabilities to more technical fields.
AI Model
60.7K

O1 CODER
O1-CODER is a project aimed at replicating OpenAI's o1 model with a focus on programming tasks. The project combines reinforcement learning (RL) and Monte Carlo Tree Search (MCTS) to strengthen the model's System-2 thinking, aiming to generate more efficient and logical code (a generic MCTS sketch follows this entry). It is relevant for improving programming efficiency and code quality, particularly in scenarios involving extensive automated testing and code optimization.
Coding Assistant
50.5K
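
For readers unfamiliar with the search component mentioned above, here is a generic UCT-based MCTS skeleton. It is only an illustration of the technique, not O1-CODER's implementation; in a code-generation setting, expansion and rollout (sampling candidate code from the LLM and scoring it, e.g. by the fraction of unit tests passed) would sit between selection and backpropagation.

```python
# Generic Monte Carlo Tree Search skeleton (UCT selection + backpropagation).
import math

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct_score(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")  # explore unvisited children first
    exploit = child.value / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def select(node):
    # Descend the tree, always picking the child with the best UCT score.
    while node.children:
        node = max(node.children, key=lambda ch: uct_score(ch, node.visits))
    return node

def backpropagate(node, reward):
    # Propagate the rollout reward (e.g. fraction of tests passed) to the root.
    while node is not None:
        node.visits += 1
        node.value += reward
        node = node.parent
```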

Qwen2.5 Coder 1.5B Instruct GPTQ Int4
Qwen2.5-Coder is the code-focused series of the Qwen large language model family, covering code generation, reasoning, and debugging. Built on the powerful Qwen2.5 base, the series is trained on 5.5 trillion tokens of source code, text-code grounding data, and synthetic data, making it one of the leading open-source code model families today, with coding ability at the top end comparable to GPT-4o. This release is a 4-bit GPTQ-quantized version of the 1.5B instruction-tuned model. Qwen2.5-Coder also offers comprehensive real-world application support, such as code agents, improving coding proficiency while maintaining strengths in mathematics and general skills.
Code Reasoning
44.7K

Qwen2.5 Coder 1.5B Instruct GGUF
Qwen2.5-Coder is the latest series of the Qwen large language model family, designed for code generation, reasoning, and debugging. Built on the powerful Qwen2.5 base, it scales training data to 5.5 trillion tokens of source code, text-code grounding data, and synthetic data. Qwen2.5-Coder-32B has emerged as the most advanced open-source code large language model, matching the coding capabilities of GPT-4o. This release is the 1.5B-parameter instruction-tuned version packaged in GGUF format: a transformer-based causal language model that has gone through both pre-training and post-training (a hedged llama-cpp-python loading sketch follows this entry).
Code Reasoning
50.2K
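
GGUF builds like this one are usually run with llama.cpp or its Python bindings. The sketch below uses llama-cpp-python; the repository id and the quantization filename pattern are assumptions to verify on the model page.

```python
# Minimal sketch: running a GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF",  # assumed repo id
    filename="*q4_k_m.gguf",                          # glob for one quant variant
    n_ctx=4096,                                       # context window for this session
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a bash one-liner to count lines of Python code."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```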

Qwen2.5 Coder 1.5B Instruct AWQ
Qwen2.5-Coder is the latest series of the Qwen large language model family, designed for code generation, reasoning, and fixing. Built on the powerful Qwen2.5 base, it is trained on 5.5 trillion tokens of source code, text-code grounding data, and synthetic data, placing its coding capabilities at the forefront of open-source code LLMs. It not only improves coding ability but also maintains strengths in mathematics and general capabilities.
Code Reasoning
44.7K

Qwen2.5 Coder 3B Instruct GGUF
Qwen2.5-Coder is the latest series of the Qwen large language model family, focusing on code generation, reasoning, and repair. Built on the powerful Qwen2.5 base, it is trained on a dataset of 5.5 trillion tokens including source code, code-grounded text, and synthetic data. Qwen2.5-Coder-32B has emerged as the most advanced open-source code large language model, matching the coding capabilities of GPT-4o. In practical applications, it also provides a more comprehensive foundation, such as code agents, enhancing coding prowess while retaining advantages in math and general abilities.
Code Reasoning
46.4K

Qwen2.5 Coder 32B Instruct GPTQ Int8
Qwen2.5-Coder-32B-Instruct-GPTQ-Int8 is a large language model in the Qwen series specifically optimized for code generation, featuring 32 billion parameters and supporting long-text processing. It is one of the most advanced open-source code generation models. The model has been further trained and optimized on top of Qwen2.5, showing significant improvements in code generation, reasoning, and debugging while maintaining strengths in mathematics and general capabilities. It uses GPTQ 8-bit quantization to reduce model size and improve inference efficiency.
Long Text Processing
47.5K

Qwen2.5 Coder 0.5B Instruct
Qwen2.5-Coder is the latest series of the Qwen large language model, focusing on code generation, reasoning, and fixing. Built on the powerful Qwen2.5 with an extended training dataset of 5.5 trillion tokens that includes source code, text code bases, and synthetic data, Qwen2.5-Coder-32B has become the leading open-source code LLM, matching GPT-4o in coding abilities. This model not only enhances coding capabilities but also maintains superiority in mathematics and general abilities, providing a comprehensive foundation for real-world applications like code assistance.
Coding Assistant
44.7K

Qwen2.5 Coder 1.5B
Qwen2.5-Coder-1.5B is a large language model in the Qwen2.5-Coder series, focusing on code generation, reasoning, and debugging. Built on the robust Qwen2.5 architecture, the series expands training data to 5.5 trillion tokens of source code, text-code grounding data, synthetic data, and more, making it a leader among open-source code LLMs, with the flagship model rivaling GPT-4o's coding capabilities. Qwen2.5-Coder-1.5B also strengthens mathematical and general capabilities, providing a more comprehensive foundation for practical applications such as code agents. As a base (non-instruct) model, it can also be used for fill-in-the-middle completion (a hedged FIM sketch follows this entry).
Coding Assistant
50.0K
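
As a hedged illustration of how base Qwen2.5-Coder checkpoints are typically used for code infilling, the sketch below uses the fill-in-the-middle special tokens described in the Qwen2.5-Coder documentation; the token names and repo id should be verified against the model card.

```python
# Minimal sketch: fill-in-the-middle prompting with a base (non-instruct) model.
# FIM token names follow the Qwen2.5-Coder documentation; treat them as assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-1.5B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = (
    "<|fim_prefix|>def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n"
    "<|fim_suffix|>\n    return quicksort(left) + [pivot] + quicksort(right)\n<|fim_middle|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated middle span.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```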

Qwen2.5 Coder 1.5B Instruct
Qwen2.5-Coder is the latest series in the Qwen large language model family, focusing on code generation, code reasoning, and code fixing. Leveraging the capabilities of Qwen2.5, the series was trained on 5.5 trillion tokens of source code, text-code grounding data, synthetic data, and more, making it a leader among open-source code generation models, with coding ability at the top end comparable to GPT-4o. It not only enhances coding capability but also retains strengths in mathematics and general skills, providing a robust foundation for practical applications such as code agents.
Coding Assistant
48.9K

Qwen2.5 Coder 3B Instruct
Qwen2.5-Coder is the latest series of the Qwen large language model family, focused on code generation, reasoning, and repair. Based on the powerful Qwen2.5, the series significantly improves code generation, reasoning, and repair by scaling training data to 5.5 trillion tokens, including source code, text-code grounding data, synthetic data, and more. The Qwen2.5-Coder-3B model has 3.09B parameters, 36 layers, 16 query attention heads and 2 key-value attention heads, and a context length of 32,768 tokens (a config-inspection sketch follows this entry). It stands out among open-source code LLMs, with the series' top model matching the coding capabilities of GPT-4o, and gives developers a powerful code-assistance tool.
Coding Assistant
46.4K
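
The architecture figures quoted above can be checked directly from the published config, without downloading any weights. The repo id below is assumed to be the public Hugging Face one.

```python
# Inspect the model configuration only (config.json); no weights are fetched.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-Coder-3B-Instruct")  # assumed repo id
print(cfg.num_hidden_layers)        # expected 36 layers
print(cfg.num_attention_heads)      # expected 16 query heads
print(cfg.num_key_value_heads)      # expected 2 KV heads (grouped-query attention)
print(cfg.max_position_embeddings)  # context length, expected 32768
```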

Qwen2.5 Coder 7B Instruct
Qwen2.5-Coder-7B-Instruct is a code-specific large language model in the Qwen2.5-Coder series, which ships in six mainstream sizes (0.5B, 1.5B, 3B, 7B, 14B, and 32B parameters) to meet the diverse needs of developers. The model shows significant improvements in code generation, reasoning, and debugging, trained on an extensive dataset of 5.5 trillion tokens that includes source code, code-related text, and synthetic data. Qwen2.5-Coder-32B represents the latest advancement in open-source code LLMs, matching the coding capabilities of GPT-4o. The model also supports context lengths of up to 128K tokens, providing a solid foundation for practical applications like code agents (a minimal chat-template sketch follows this entry).
Coding Assistant
44.4K
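
A standard way to use the instruction-tuned model is the transformers chat template, sketched below; the repo id is assumed to be the public Hugging Face one, and device_map="auto" requires accelerate.

```python
# Minimal sketch: chat-template generation with an instruction-tuned checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Find the bug: `for i in range(len(xs)): xs.pop(i)`"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
# Decode only the assistant's reply, skipping the prompt tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```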

Qwen2.5 Coder 14B
Qwen2.5-Coder-14B is a large language model in the Qwen series focused on code, encompassing various model sizes ranging from 0.5 to 32 billion parameters to meet diverse developer needs. The model shows significant improvements in code generation, reasoning, and repair, built upon the powerful Qwen2.5, with a training token expansion to 5.5 trillion, including source code, grounded text code, and synthetic data. Qwen2.5-Coder-32B has become the leading open-source code LLM, matching the coding capacity of GPT-4o. Additionally, it provides a comprehensive foundation for real-world applications such as code agents, enhancing coding abilities while maintaining advantages in mathematics and general tasks. It supports long contexts of up to 128K tokens.
Coding Assistant
48.6K

Qwen2.5 Coder 32B
Qwen2.5-Coder-32B is a code generation model based on Qwen2.5, featuring 32 billion parameters, making it one of the largest open-source code language models available today. It shows significant improvements in code generation, reasoning, and fixing, capable of handling long texts up to 128K tokens, which is suitable for practical applications such as code assistants. The model also maintains advantages in mathematical and general capabilities, supporting long text processing, thus serving as a powerful assistant for developers in code development.
Coding Assistant
48.3K

Qwen2.5 Coder Artifacts
Qwen2.5 Coder Artifacts is a collection of programming tools hosted on Hugging Face, showcasing the application of artificial intelligence in the programming realm. This product suite uses cutting-edge machine learning techniques to help developers enhance coding efficiency and optimize code quality. According to product background information, it is created and maintained by Qwen, aiming to offer developers a powerful programming assistance tool. The product is free and is focused on boosting developer productivity.
Coding Assistant
74.2K
English Picks

Alex Sidebar
Alex Sidebar is a smart sidebar plugin designed for Xcode that enhances developers' programming efficiency through a range of features. According to the product background, Alex Sidebar is backed by Y Combinator and is free to use during its beta phase. It helps developers write code faster and more intelligently through features such as semantic search, code generation, and automatic error fixing.
Development & Tools
108.5K

Github Issue Helper Chrome Extension
The GitHub Issue Helper Chrome Extension is a Chrome browser plugin that uses large language models (LLMs) to summarize GitHub issues and propose potential solutions based on the issue content. Its main advantage is that it automatically summarizes GitHub issues and provides customization options, letting users refine its behavior with their own LLM API keys. It is a useful tool for developers and project maintainers, helping to save time and improve issue-resolution efficiency (an illustrative sketch of the general pattern follows this entry). The extension is open source on GitHub under the MIT license.
Development & Tools
47.7K
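
The general pattern behind such a tool — fetch an issue over the GitHub REST API and ask an LLM to summarize it and suggest fixes — can be sketched as follows. This is illustrative only, not the extension's code; the issue coordinates are placeholders and the OpenAI client is used here merely as one example backend.

```python
# Illustrative only: summarize a GitHub issue with an LLM.
import os
import requests
from openai import OpenAI

owner, repo, number = "octocat", "hello-world", 1  # placeholder issue
issue = requests.get(
    f"https://api.github.com/repos/{owner}/{repo}/issues/{number}",
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
).json()

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example backend; the extension lets users bring their own key
    messages=[
        {"role": "system", "content": "Summarize GitHub issues and propose likely fixes."},
        {"role": "user", "content": f"Title: {issue['title']}\n\nBody:\n{issue.get('body') or ''}"},
    ],
)
print(resp.choices[0].message.content)
```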
Featured AI Tools

Flow AI
Flow is an AI-driven movie-making tool designed for creators, utilizing Google DeepMind's advanced models to allow users to easily create excellent movie clips, scenes, and stories. The tool provides a seamless creative experience, supporting user-defined assets or generating content within Flow. In terms of pricing, the Google AI Pro and Google AI Ultra plans offer different functionalities suitable for various user needs.
Video Production
43.1K

Nocode
NoCode is a platform that requires no programming experience, allowing users to quickly generate applications by describing their ideas in natural language, aiming to lower development barriers so more people can realize their ideas. The platform provides real-time previews and one-click deployment features, making it very suitable for non-technical users to turn their ideas into reality.
Development Platform
46.1K

Listenhub
ListenHub is a lightweight AI podcast generation tool that supports both Chinese and English. Based on cutting-edge AI technology, it can quickly generate podcast content of interest to users. Its main advantages include natural dialogue and ultra-realistic voice effects, allowing users to enjoy high-quality auditory experiences anytime and anywhere. ListenHub not only improves the speed of content generation but also offers compatibility with mobile devices, making it convenient for users to use in different settings. The product is positioned as an efficient information acquisition tool, suitable for the needs of a wide range of listeners.
AI
43.6K

Minimax Agent
MiniMax Agent is an intelligent AI companion built on the latest multimodal technology. Through MCP-based multi-agent collaboration, it lets a team of AI agents solve complex problems efficiently. It provides features such as instant answers, visual analysis, and voice interaction, which can increase productivity by up to 10 times.
Multimodal technology
45.3K
Chinese Picks

Tencent Hunyuan Image 2.0
Tencent Hunyuan Image 2.0 is Tencent's latest AI image generation model, with significant improvements in generation speed and image quality. Thanks to an ultra-high-compression-ratio codec and a new diffusion architecture, images can be generated in milliseconds, avoiding the wait of traditional generation. The model also combines reinforcement learning with human aesthetic knowledge to improve realism and detail, making it suitable for professional users such as designers and creators.
Image Generation
44.2K

Openmemory MCP
OpenMemory is an open-source personal memory layer that provides private, portable memory management for large language models (LLMs). It ensures users have full control over their data, maintaining its security when building AI applications. This project supports Docker, Python, and Node.js, making it suitable for developers seeking personalized AI experiences. OpenMemory is particularly suited for users who wish to use AI without revealing personal information.
open source
43.9K

Fastvlm
FastVLM is an efficient visual encoding model designed specifically for visual language models. It uses the innovative FastViTHD hybrid visual encoder to reduce the time required for encoding high-resolution images and the number of output tokens, resulting in excellent performance in both speed and accuracy. FastVLM is primarily positioned to provide developers with powerful visual language processing capabilities, applicable to various scenarios, particularly performing excellently on mobile devices that require rapid response.
Image Processing
42.0K
Chinese Picks

Liblibai
LiblibAI is a leading Chinese AI creative platform offering powerful AI creative tools to help creators bring their imagination to life. The platform provides a vast library of free AI creative models, allowing users to search and utilize these models for image, text, and audio creations. Users can also train their own AI models on the platform. Focused on the diverse needs of creators, LiblibAI is committed to creating inclusive conditions and serving the creative industry, ensuring that everyone can enjoy the joy of creation.
AI Model
6.9M